Abstract:
Hopfield Neural Networks (HNNs) are an important class of neural networks that are useful in pattern recognition, and capacity is an important criterion in the design of such networks. In this research, we study the capacity experimentally determined by Hopfield and also highlight the upper and lower bounds on it. Improvements in capacity are also examined, as well as the methods for achieving higher capacity. HNN capacity enhancements and refinements may also inspire other models.
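As a point of reference for this abstract's capacity discussion, the sketch below shows the standard binary Hopfield model it refers to: Hebbian (outer-product) storage and asynchronous recall, with illustrative sizes N and P chosen below Hopfield's experimental estimate of roughly 0.15N retrievable patterns. The function names and parameters are illustrative, not taken from the paper.

```python
import numpy as np

def train_hebbian(patterns):
    """Build Hopfield weights with the Hebbian (outer-product) rule.
    patterns: array of shape (P, N) with entries in {-1, +1}."""
    P, N = patterns.shape
    W = patterns.T @ patterns / N   # sum of outer products, scaled by N
    np.fill_diagonal(W, 0.0)        # no self-connections
    return W

def recall(W, state, n_sweeps=10):
    """Asynchronous recall: update neurons one at a time for a few sweeps."""
    state = state.copy()
    N = len(state)
    for _ in range(n_sweeps):
        for i in np.random.permutation(N):
            h = W[i] @ state
            state[i] = 1 if h >= 0 else -1
    return state

# Toy experiment: store P random patterns in an N-neuron network and
# count how many are recalled exactly from a noisy probe.
rng = np.random.default_rng(0)
N, P = 100, 10                      # P/N = 0.10, below Hopfield's ~0.15 estimate
patterns = rng.choice([-1, 1], size=(P, N))
W = train_hebbian(patterns)

exact = 0
for p in patterns:
    probe = p.copy()
    flip = rng.choice(N, size=5, replace=False)   # flip 5 bits of noise
    probe[flip] *= -1
    exact += np.array_equal(recall(W, probe), p)
print(f"exactly recalled {exact}/{P} patterns")
```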
Abstract:
The continuous Hopfield network (CHN) is a classical neural network model. It can be used to solve some classification and optimization problems, in the sense that the equilibrium points of a differential equation system associated with the CHN are the solutions to those problems. The Euler method is the most widespread algorithm for obtaining these CHN equilibrium points, since it is the simplest and quickest method for simulating complex differential equation systems. However, this method is highly sensitive to initial conditions and requires a great deal of CPU time for medium-sized or larger CHN instances. In order to avoid these shortcomings, a new algorithm that obtains an equilibrium point of the CHN is introduced in this paper. It is a variable time-step method with the property that the convergence time is shortened; moreover, its robustness with respect to initial conditions is proven, and computational experiments comparing it with the Euler method are reported.
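The fixed-step Euler baseline that this abstract compares against can be sketched as follows. The tanh activation, step size, and stopping tolerance are illustrative assumptions; this is not the paper's variable time-step algorithm.

```python
import numpy as np

def chn_euler(T, I, u_init, dt=0.01, tau=1.0, beta=1.0, tol=1e-6, max_steps=100_000):
    """Fixed-step Euler integration of continuous Hopfield dynamics
    du/dt = -u/tau + T v + I, with v = tanh(beta * u),
    iterated until the per-step state change falls below tol."""
    u = u_init.copy()
    for _ in range(max_steps):
        v = np.tanh(beta * u)
        du = (-u / tau + T @ v + I) * dt
        u += du
        if np.max(np.abs(du)) < tol:      # treat as an (approximate) equilibrium point
            break
    return np.tanh(beta * u)              # output state at equilibrium

# Illustrative use: a small symmetric network and an arbitrary starting point.
rng = np.random.default_rng(1)
n = 8
T = rng.standard_normal((n, n))
T = (T + T.T) / 2
np.fill_diagonal(T, 0.0)
I = np.zeros(n)
v_star = chn_euler(T, I, u_init=0.1 * rng.standard_normal(n))
print(v_star)
```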
Abstract:
Complex-valued Hopfield neural networks (CVHNNs) can be used to store multilevel data, such as gray-scale images. Such networks have low noise tolerance, which is a severe problem for their applications. To improve the noise tolerance, we have to study pseudomemories. In the case of one training pattern, CVHNNs have only the rotated patterns as pseudomemories, and there are many such rotated patterns; this is considered the reason why CVHNNs have low noise tolerance. In the present paper, we investigate the pseudomemories of two-dimensional multistate Hopfield neural networks, including the complex-valued ones, with multiple training patterns. Computer simulations show that there are many pseudomemories other than the rotated patterns. (C) 2016 Institute of Electrical Engineers of Japan. Published by John Wiley & Sons, Inc.
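The "rotated patterns" that this abstract identifies as unavoidable pseudomemories follow from a standard property of the complex Hebbian rule. The sketch below uses conventional CVHNN notation (weights w_jk, resolution K), which is an assumption of this note, not the paper's own notation.

```latex
% One stored pattern z with unit-modulus, K-state entries and Hebbian weights
% w_{jk} = \frac{1}{N} z_j \bar{z}_k.  For any K-th root of unity c = e^{i 2\pi m/K},
% the local field at neuron j under the rotated input c z is
\[
\sum_{k} w_{jk}\,(c\, z_k)
  \;=\; \frac{c}{N}\, z_j \sum_{k} \bar{z}_k\, z_k
  \;=\; c\, z_j ,
\]
% which is already a valid K-state value, so c z is again a fixed point:
% every rotation of a stored pattern is a pseudomemory, giving K-1 spurious
% attractors per pattern.
```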
Abstract:
In this paper, the Hopfield Neural Network (HNN) model is used to estimate the urban Origin-Destination (OD) distribution matrix from the link volumes of the transportation network, so as to improve solution speed and precision.
Abstract:
The authors extend the analysis of the block truncation coding (BTC) algorithm using a Hopfield neural network (HNN). They show that its performance is suboptimal (in the mean square error sense) and that alternative (non-neural network) BTC algorithms are available with virtually the same performance.
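For context, here is a minimal sketch of the classical moment-preserving BTC scheme that the HNN-based variant in this abstract is measured against; the function names and block size are illustrative, and the HNN formulation itself is not reproduced here.

```python
import numpy as np

def btc_block(block):
    """Classical moment-preserving BTC for one image block.
    Returns a 1-bit-per-pixel bitmap plus two reconstruction levels (a, b)."""
    x = block.astype(float).ravel()
    n = x.size
    mean, std = x.mean(), x.std()
    bitmap = x >= mean                    # pixels at/above the block mean
    q = int(bitmap.sum())
    if q in (0, n):                       # flat block: a single level suffices
        return bitmap.reshape(block.shape), mean, mean
    a = mean - std * np.sqrt(q / (n - q))        # level for pixels below the mean
    b = mean + std * np.sqrt((n - q) / q)        # level for pixels at/above the mean
    return bitmap.reshape(block.shape), a, b

def btc_decode(bitmap, a, b):
    """Reconstruct a block from its bitmap and the two levels."""
    return np.where(bitmap, b, a)

# Example on one 4x4 block of 8-bit pixels.
rng = np.random.default_rng(2)
block = rng.integers(0, 256, size=(4, 4))
bm, a, b = btc_block(block)
print(np.round(btc_decode(bm, a, b)))
```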
Abstract:
A complex-valued Hopfield neural network (CHNN), a multistate Hopfield model, is useful for processing multilevel data, such as image data. Several alternatives to the CHNN have been proposed. A hyperbolic-valued Hopfield neural network (HHNN) improves the noise tolerance of the CHNN. In this work, we propose a synthetic Hopfield neural network (SHNN), a combination of a CHNN and an HHNN. An SHNN is the first combination of Hopfield models using different algebras. Since a CHNN and an HHNN have different operators, such as addition and multiplication, they are represented by matrices and vectors to compose an SHNN. A CHNN and an HHNN have different global minima, and only the common global minima are global minima of the SHNN. Thus, an SHNN is expected to improve the noise tolerance, and computer simulations support this expectation.
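The remark that the two models "are represented by matrices and vectors to compose an SHNN" can be read against the textbook 2x2 real representations of complex and hyperbolic numbers, shown below; the concrete composition used in the paper is not reproduced here.

```latex
% Standard identifications of complex and hyperbolic (u^2 = +1) numbers with
% 2x2 real matrices, and of neuron states with 2-vectors, so that both models
% fit a single matrix-vector formulation.
\[
a + b\,i \;\longleftrightarrow\;
\begin{pmatrix} a & -b \\ b & a \end{pmatrix},
\qquad
a + b\,u \;\longleftrightarrow\;
\begin{pmatrix} a & b \\ b & a \end{pmatrix},
\qquad
\text{state} \;\longleftrightarrow\; \begin{pmatrix} a \\ b \end{pmatrix}.
\]
```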
Abstract:
Hopfield neural networks on scale-free networks display a power-law relation between the stability of patterns and the number of patterns. Stability is measured by the overlap between the output state and the stored pattern presented to the network. In simulations, the overlap declines to a constant following a power-law decay. Here we explain this power-law behavior through a signal-to-noise ratio analysis. We show that on sparse networks storing many patterns, the stability of stored patterns can be approximated by a power-law function with exponent -0.5. The analytic and simulation results differ in that the analytic overlap decays to 0. This difference arises because the signal and noise terms of the nodes deviate from the mean-field approximation in sparse, finite-size networks.
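The exponent -0.5 quoted in this abstract is what a standard mean-field signal-to-noise estimate gives; the sketch below uses illustrative normalizations and is not taken verbatim from the paper.

```latex
% Local field at node i = signal from the probed pattern + crosstalk from the
% other P-1 stored patterns; k_i denotes the degree of node i.
\[
h_i \;=\; \underbrace{\xi_i^{\,1}}_{\text{signal}} \;+\; \underbrace{\eta_i}_{\text{crosstalk}},
\qquad
\operatorname{Var}(\eta_i) \;\approx\; \frac{P}{k_i},
\qquad
m \;\approx\; \operatorname{erf}\!\Big(\tfrac{1}{\sqrt{2\operatorname{Var}(\eta_i)}}\Big)
\;\xrightarrow{\;P \gg k_i\;}\;
\sqrt{\tfrac{2}{\pi}}\,\sqrt{\tfrac{k_i}{P}} \;\propto\; P^{-1/2},
\]
% i.e. the analytic overlap decays to zero as a power law with exponent -0.5.
```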
Abstract:
A complex-valued Hopfield neural network (CHNN) is a multistate Hopfield model and has been applied to the storage of image data. It has weak noise tolerance due to the inherent property of rotational invariance. A hyperbolic-valued Hopfield neural network (HHNN) resolves the rotational invariance and improves the noise tolerance. A rotor Hopfield neural network (RHNN) is an extension of the CHNN whose weights are represented by matrices; it provides excellent noise tolerance by resolving the rotational invariance. However, an RHNN needs twice as many weight parameters as a CHNN, unlike an HHNN. In this work, we propose a diagonal RHNN (DRHNN), which restricts the weights of an RHNN to diagonal matrices and reduces the number of weight parameters. The numbers of weight parameters in a DRHNN, an HHNN, and a CHNN are the same. In addition, a DRHNN resolves the rotational invariance and provides excellent noise tolerance like an RHNN. (C) 2020 Elsevier B.V. All rights reserved.
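The parameter-count comparison in this abstract can be made concrete with the usual 2x2 real weight representations; the notation below is illustrative and not the paper's.

```latex
% Real parameters per weight in each model:
\[
\text{CHNN } \begin{pmatrix} a & -b \\ b & a \end{pmatrix}\ (2),\qquad
\text{HHNN } \begin{pmatrix} a & b \\ b & a \end{pmatrix}\ (2),\qquad
\text{RHNN } \begin{pmatrix} a & b \\ c & d \end{pmatrix}\ (4),\qquad
\text{DRHNN } \begin{pmatrix} a & 0 \\ 0 & d \end{pmatrix}\ (2).
\]
```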
Abstract:
Adleman's 1994 proof that DNA oligomers using specific molecular reactions can be used to solve the Hamiltonian Path Problem suggested the possibility of massively parallel processing power, remarkable energy efficiency, and compact data storage ability for this new type of computation. The Boolean architecture of the first DNA computers and the fact that DNA hybridization reactions can be error-prone indicate that some form of fault tolerance or error correction would be beneficial in any large-scale applications. In this study, we demonstrate the operation of a four-dimensional Hopfield associative memory storing two memories, as an archetypal fault-tolerant neural network implemented using DNA molecular reactions. The response of the network compares favorably to a computer simulation and suggests that the protocols could be scaled to a network of significantly larger dimensions.